    Evolution of Discrete Dynamical Systems

    We investigate the evolution of three different types of discrete dynamical systems. In each case, simple local rules are shown to yield interesting collective global behavior. (a) We introduce a mechanism for the evolution of growing small-world networks. We demonstrate that purely local connection rules, when coupled with network growth, can result in short path lengths for the network as a whole. (b) We consider the general character of the spatial distributions of populations that grow through reproduction and subsequent local resettlement of new population members. Several simple one- and two-dimensional point placement models are presented to illustrate possible generic behavior of these distributions. We show, both numerically and analytically, that all of the models lead to multifractal spatial distributions of population. (c) We present a discrete lattice model to investigate the segregation of three-species granular mixtures in horizontally rotating cylinders. We demonstrate that the simple local rules of the model are able to reproduce many of the experimentally observed global phenomena.
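
    As a concrete illustration of item (a), the sketch below grows a network in which each new node attaches to a randomly chosen existing node and then copies some of that node's links, so that no global path information is ever used. The attachment rule and its parameters are illustrative assumptions rather than the rule defined in this work; networkx is used only to measure the resulting mean shortest path length.

```python
# Sketch of a growing network built from a local attachment rule
# (an assumption for illustration, not the thesis's exact rule).
import random
import networkx as nx

def grow_local_network(n_nodes, p_copy=0.5, seed=0):
    rng = random.Random(seed)
    g = nx.Graph()
    g.add_edge(0, 1)                            # seed graph: a single edge
    for new in range(2, n_nodes):
        anchor = rng.choice(list(g.nodes))      # entry point for the new node
        g.add_edge(new, anchor)
        for nb in list(g.neighbors(anchor)):    # copy some of the anchor's links
            if nb != new and rng.random() < p_copy:
                g.add_edge(new, nb)
    return g

g = grow_local_network(1000)
print("nodes:", g.number_of_nodes(),
      "| mean shortest path length:", round(nx.average_shortest_path_length(g), 2))
```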

    A portfolio approach to massively parallel Bayesian optimization

    One way to reduce the time of conducting optimization studies is to evaluate designs in parallel rather than just one at a time. For expensive-to-evaluate black-boxes, batch versions of Bayesian optimization have been proposed. They work by building a surrogate model of the black-box that can be used to select the designs to evaluate efficiently via an infill criterion. Still, with higher levels of parallelization becoming available, the strategies that work for a few tens of parallel evaluations become limiting, in particular due to the complexity of selecting more evaluations. This is even more crucial when the black-box is noisy, necessitating more evaluations as well as repeated experiments. Here we propose a scalable strategy that can keep up with massive batching natively, focused on the exploration/exploitation trade-off and a portfolio allocation. We compare the approach with related methods on deterministic and noisy functions, for mono- and multi-objective optimization tasks. These experiments show similar or better performance than existing methods, while being orders of magnitude faster.
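
    The snippet below is a minimal sketch of the batch-selection idea rather than the portfolio strategy proposed in the paper: a Gaussian process surrogate scores random candidate designs with a family of lower-confidence-bound weights spanning the exploration/exploitation trade-off, and each weight in this toy "portfolio" contributes one design to the batch. The objective function, candidate pool, and weight grid are assumptions chosen only for illustration.

```python
# Toy batch Bayesian optimization with a portfolio of confidence-bound weights.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def f(x):                                        # toy black-box to minimize
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(0)
X = rng.uniform(0, 3, size=(5, 1))               # initial design
y = f(X).ravel()

for it in range(3):                              # a few batch iterations
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    cand = rng.uniform(0, 3, size=(500, 1))      # random candidate designs
    mu, sd = gp.predict(cand, return_std=True)
    kappas = np.linspace(0.1, 3.0, 8)            # "portfolio": one design per weight
    batch = np.unique([np.argmin(mu - k * sd) for k in kappas])
    X = np.vstack([X, cand[batch]])
    y = np.concatenate([y, f(cand[batch]).ravel()])

print("best value found:", y.min().round(3))
```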

    Modeling the X-ray - UV Correlations in NGC 7469

    We model the correlated X-ray - UV observations of NGC 7469, for which well-sampled data in both these bands have been obtained recently in a multiwavelength monitoring campaign. To this end we derive the transfer function in wavelength λ and time lag τ for reprocessing hard (X-ray) photons from a point source to softer ones (UV-optical) by an infinite plane (representing a cool, thin accretion disk) located at a given distance below the X-ray source, under the assumption that the X-ray flux is absorbed and emitted locally by the disk as a black body of temperature appropriate to the incident flux. Using the observed X-ray light curve as input we have computed the expected continuum UV emission as a function of time at several wavelengths (λλ 1315, 6962, 15000, 30000 Å), assuming that the X-ray source is located one Schwarzschild radius above the disk plane, with the mass of the black hole M and the latitude angle θ of the observer relative to the disk plane as free parameters. We have searched the parameter space of black hole masses and observer angles but were unable to reproduce UV light curves which would resemble, even remotely, those observed. We also explored whether particular combinations of the values of these parameters could lead to light curves whose statistical properties (i.e. the autocorrelation and cross-correlation functions) would match those corresponding to the observed UV light curve at λ 1315 Å. Even though we considered black hole masses as large as 10^9 M_⊙, no such match was possible. Our results indicate that some of the fundamental assumptions of this model will have to be modified to obtain even approximate agreement between the observed and model X-ray - UV light curves. Comment: 16 pages, 13 figures, ApJ in press
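
    The reprocessing picture sketched above reduces to a convolution: the predicted UV light curve is the observed X-ray light curve convolved with the transfer function in time lag. The snippet below illustrates only that step, using a synthetic X-ray light curve and a toy Gaussian kernel in place of the disk-geometry transfer function derived in the paper.

```python
# Toy reprocessing: UV(t) = integral of psi(tau) * X(t - tau) dtau.
import numpy as np

dt = 0.1                                          # days per sample
t = np.arange(0.0, 30.0, dt)
rng = np.random.default_rng(1)
xray = 1.0 + 0.3 * np.sin(2 * np.pi * t / 5) + 0.1 * rng.normal(size=t.size)

lag = np.arange(0.0, 5.0, dt)                     # time lags (days)
psi = np.exp(-((lag - 1.0) ** 2) / 0.5)           # toy transfer function peaking at 1 day
psi /= psi.sum() * dt                             # normalize to unit area

uv = dt * np.convolve(xray, psi)[: t.size]        # predicted UV light curve

ccf = [np.corrcoef(xray[: t.size - k], uv[k:])[0, 1] for k in range(lag.size)]
print(f"X-ray/UV cross-correlation peaks at a lag of ~{lag[int(np.argmax(ccf))]:.1f} days")
```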

    Characterization and valuation of uncertainty of calibrated parameters in stochastic decision models

    We evaluated the implications of different approaches to characterizing the uncertainty of calibrated parameters of stochastic decision models (DMs) for the quantified value of that uncertainty in decision making. We used a microsimulation DM of colorectal cancer (CRC) screening to conduct a cost-effectiveness analysis (CEA) of a 10-year colonoscopy screening. We calibrated the natural history model of CRC to epidemiological data with different degrees of uncertainty and obtained the joint posterior distribution of the parameters using a Bayesian approach. We conducted a probabilistic sensitivity analysis (PSA) on all the model parameters with different characterizations of the uncertainty of the calibrated parameters and estimated the value of uncertainty of the different characterizations with a value-of-information analysis. All analyses were conducted using high-performance computing resources running the Extreme-scale Model Exploration with Swift (EMEWS) framework. The posterior distribution had high correlation among some parameters; the parameters of the Weibull hazard function for the age of onset of adenomas had the highest posterior correlation, of -0.958. When either the full posterior distribution or the maximum-a-posteriori estimate of the calibrated parameters was considered, there was little difference in the spread of the distribution of the CEA outcomes, with a similar expected value of perfect information (EVPI) of $653 and $685, respectively, at a willingness-to-pay (WTP) threshold of $66,000/QALY. Ignoring correlation in the posterior distribution of the calibrated parameters produced the widest distribution of CEA outcomes and the highest EVPI, of $809, at the same WTP. Different characterizations of the uncertainty of calibrated parameters have implications for the expected value of reducing uncertainty in the CEA. Ignoring the inherent correlation among calibrated parameters in a PSA overestimates the value of uncertainty. Comment: 17 pages, 6 figures, 3 tables
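
    The value-of-information step reported above uses the standard identity EVPI = E[max_d NMB_d] - max_d E[NMB_d], evaluated over PSA samples of the net monetary benefit (NMB). The sketch below applies it to synthetic two-strategy PSA draws (the cost and QALY numbers are made up, not outputs of the CRC microsimulation) at the $66,000/QALY threshold used in the analysis.

```python
# EVPI from synthetic PSA samples of two strategies (illustrative numbers only).
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
wtp = 66_000                                   # willingness-to-pay, $/QALY

# Synthetic PSA draws of (cost, QALY) per strategy.
cost = np.column_stack([rng.normal(1_000, 200, n), rng.normal(1_800, 300, n)])
qaly = np.column_stack([rng.normal(20.00, 0.05, n), rng.normal(20.02, 0.05, n)])

nmb = wtp * qaly - cost                        # net monetary benefit per draw
evpi = nmb.max(axis=1).mean() - nmb.mean(axis=0).max()
print(f"EVPI at ${wtp:,}/QALY: ${evpi:,.0f} per person")
```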

    Trajectory-oriented optimization of stochastic epidemiological models

    Epidemiological models must be calibrated to ground truth for downstream tasks such as producing forward projections or running what-if scenarios. The meaning of calibration changes in the case of a stochastic model, since the output of such a model is generally described via an ensemble or a distribution. Each member of the ensemble is usually mapped to a random number seed (explicitly or implicitly). With the goal of finding not only the input parameter settings but also the random seeds that are consistent with the ground truth, we propose a class of Gaussian process (GP) surrogates along with an optimization strategy based on Thompson sampling. This Trajectory Oriented Optimization (TOO) approach produces actual trajectories close to the empirical observations, instead of a set of parameter settings where only the mean simulation behavior matches the ground truth.
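
    A much-simplified sketch of the Thompson-sampling loop follows: a Gaussian process is fit to the observed discrepancies between simulated and ground-truth trajectories, one posterior sample path is drawn over candidate parameter settings, and that sample's minimizer is evaluated next, with the random seed of every run recorded alongside the parameters. The toy simulator, discrepancy measure, and seed handling are assumptions for illustration; the paper's GP surrogate construction is not reproduced here.

```python
# Thompson sampling over a toy stochastic simulator, tracking random seeds.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def discrepancy(theta, seed):                  # toy distance to "ground truth"
    noise = np.random.default_rng(seed).normal(0, 0.05)
    return (theta - 0.7) ** 2 + noise

X = rng.uniform(0, 1, size=(5, 1))             # initial parameter settings
seeds = rng.integers(0, 1_000, size=5)
y = np.array([discrepancy(x[0], s) for x, s in zip(X, seeds)])

for it in range(20):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    cand = rng.uniform(0, 1, size=(200, 1))
    draw = gp.sample_y(cand, random_state=it).ravel()   # one posterior sample path
    x_next = cand[np.argmin(draw)]
    s_next = rng.integers(0, 1_000)                     # record the seed as well
    X = np.vstack([X, x_next])
    seeds = np.append(seeds, s_next)
    y = np.append(y, discrepancy(x_next[0], s_next))

best = int(np.argmin(y))
print("best theta:", round(float(X[best, 0]), 3), "with seed", int(seeds[best]))
```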

    Modeling hepatitis C micro-elimination among people who inject drugs with direct-acting antivirals in metropolitan Chicago

    Hepatitis C virus (HCV) infection is a leading cause of chronic liver disease and mortality worldwide. Direct-acting antiviral (DAA) therapy leads to high cure rates. However, persons who inject drugs (PWID) are at risk for reinfection after cure and may require multiple DAA treatments to reach the World Health Organization’s (WHO) goal of HCV elimination by 2030. Using an agent-based model (ABM) that accounts for the complex interplay of demographic factors, risk behaviors, social networks, and geographic location for HCV transmission among PWID, we examined the combinations of DAA enrollment rate (2.5%, 5%, 7.5%, 10%), adherence (60%, 70%, 80%, 90%), and frequency of DAA treatment courses needed to achieve the WHO’s goal of reducing incident chronic infections by 90% by 2030 among a large population of PWID from Chicago, IL and surrounding suburbs. We also estimated the economic DAA costs associated with each scenario. Our results indicate that a DAA treatment rate of >7.5% per year with 90% adherence results in 75% of enrolled PWID requiring only a single DAA course; however, 19% would require 2 courses, 5% would require 3 courses, and <2% would require 4 courses, with an overall DAA cost of $325 million to achieve the WHO goal in metropolitan Chicago. We estimate a 28% increase in the overall DAA cost under low adherence (70%) compared to high adherence (90%). Our modeling results have important public health implications for HCV elimination among U.S. PWID. Using a range of feasible treatment enrollment and adherence rates, we report robust findings supporting the need to address re-exposure and reinfection among PWID in order to reduce HCV incidence.
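
    The sketch below is a toy, population-level version of the treat/cure/reinfect bookkeeping described above, not the calibrated Chicago ABM: the initial prevalence, reinfection risk, cure probability, and cost per course are placeholder values used only to show how DAA courses per person and the total DAA cost would be tallied.

```python
# Toy tally of DAA courses and cost under enrollment, adherence, and reinfection
# (all rates below are placeholders, not the calibrated model's values).
import numpy as np

rng = np.random.default_rng(7)
n = 20_000
infected = rng.random(n) < 0.4                     # assumed initial chronic HCV prevalence
courses = np.zeros(n, dtype=int)

enroll, adherence, cure_if_adherent = 0.075, 0.9, 0.95
reinfect_risk, cost_per_course = 0.05, 25_000      # placeholder values

for year in range(2022, 2031):
    treated = infected & (rng.random(n) < enroll)
    courses += treated
    cured = treated & (rng.random(n) < adherence * cure_if_adherent)
    infected = infected & ~cured
    # crude reinfection among the uninfected, scaled by current prevalence
    reinfected = ~infected & (rng.random(n) < reinfect_risk * infected.mean())
    infected |= reinfected

print("PWID by number of DAA courses:", np.bincount(courses))
print(f"total DAA cost: ${courses.sum() * cost_per_course / 1e6:.0f} million")
```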

    Emulator-based Bayesian calibration of the CISNET colorectal cancer models

    PURPOSE: To calibrate the Cancer Intervention and Surveillance Modeling Network (CISNET) SimCRC, MISCAN-Colon, and CRC-SPIN simulation models of the natural history of colorectal cancer (CRC) with an emulator-based Bayesian algorithm and to internally validate the model-predicted outcomes against the calibration targets. METHODS: We used Latin hypercube sampling to sample up to 50,000 parameter sets for each CISNET-CRC model and generated the corresponding outputs. We trained multilayer perceptron artificial neural networks (ANNs) as emulators using the input and output samples for each CISNET-CRC model. We selected ANN structures with corresponding hyperparameters (i.e., number of hidden layers, nodes, activation functions, epochs, and optimizer) that minimize the predicted mean squared error on the validation sample. We implemented the ANN emulators in a probabilistic programming language and calibrated the input parameters with Hamiltonian Monte Carlo-based algorithms to obtain the joint posterior distributions of the CISNET-CRC models' parameters. We internally validated each calibrated emulator by comparing the model-predicted posterior outputs against the calibration targets. RESULTS: The optimal ANN for SimCRC had four hidden layers and 360 hidden nodes, the one for MISCAN-Colon had four hidden layers and 114 hidden nodes, and the one for CRC-SPIN had one hidden layer and 140 hidden nodes. The total time for training and calibrating the emulators was 7.3, 4.0, and 0.66 hours for SimCRC, MISCAN-Colon, and CRC-SPIN, respectively. The mean of the model-predicted outputs fell within the 95% confidence intervals of the calibration targets for 98 of 110 targets for SimCRC, 65 of 93 for MISCAN-Colon, and 31 of 41 for CRC-SPIN. CONCLUSIONS: Using ANN emulators is a practical solution to reduce the computational burden and complexity of Bayesian calibration of individual-level simulation models used for policy analysis, like the CISNET CRC models.
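
    A minimal end-to-end sketch of the emulator workflow follows: Latin hypercube sampling of the inputs, a multilayer perceptron emulator trained on the simulator outputs, and calibration of the inputs against targets through the emulator. A toy two-parameter simulator and a plain random-walk Metropolis sampler stand in for the CISNET models and for the Hamiltonian Monte Carlo step in a probabilistic programming language.

```python
# Emulator-based calibration sketch with a toy simulator (illustrative only).
import numpy as np
from scipy.stats import qmc
from sklearn.neural_network import MLPRegressor

def simulator(theta):                              # toy stand-in for a CISNET model
    a, b = theta[..., 0], theta[..., 1]
    return np.stack([a + b, a * b, a - b ** 2], axis=-1)

rng = np.random.default_rng(0)
X = qmc.LatinHypercube(d=2, seed=0).random(2_000)  # LHS design on [0, 1]^2
Y = simulator(X)

emu = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2_000,
                   random_state=0).fit(X, Y)       # train the ANN emulator

targets = simulator(np.array([0.6, 0.3]))          # synthetic calibration targets
sigma = 0.02                                       # assumed target standard error

def log_post(theta):                               # uniform prior on [0, 1]^2
    if np.any(theta < 0) or np.any(theta > 1):
        return -np.inf
    resid = emu.predict(theta[None, :])[0] - targets
    return -0.5 * np.sum((resid / sigma) ** 2)

theta = np.array([0.5, 0.5])
lp = log_post(theta)
chain = []
for step in range(5_000):                          # random-walk Metropolis
    prop = theta + rng.normal(0, 0.05, size=2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())

print("posterior mean of the inputs:", np.mean(chain[1_000:], axis=0).round(3))
```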